Pyramid: Enhancing Selectivity in Big Data Protection with Count Featurization
Protecting vast quantities of data poses a daunting challenge for the growing
number of organizations that collect, stockpile, and monetize it. The ability
to distinguish data that is actually needed from data collected "just in case"
would help these organizations to limit the latter's exposure to attack. A
natural approach might be to monitor data use and retain only the working-set
of in-use data in accessible storage; unused data can be evicted to a highly
protected store. However, many of today's big data applications rely on machine
learning (ML) workloads that are periodically retrained by accessing, and thus
exposing to attack, the entire data store. Training set minimization methods,
such as count featurization, are often used to limit the data needed to train
ML workloads to improve performance or scalability. We present Pyramid, a
limited-exposure data management system that builds upon count featurization to
enhance data protection. As such, Pyramid uniquely introduces both the idea and
proof-of-concept for leveraging training set minimization methods to instill
rigor and selectivity into big data management. We integrated Pyramid into
Spark Velox, a framework for ML-based targeting and personalization. We
evaluate it on three applications and show that Pyramid approaches
state-of-the-art models while training on less than 1% of the raw data.
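As a rough illustration of the count featurization technique this abstract builds on (a generic sketch only, not Pyramid's implementation; the class and parameter names below are invented for illustration), a high-cardinality categorical feature can be replaced by label-conditional count statistics, so the downstream model trains on aggregates rather than raw records:

    from collections import defaultdict

    class CountFeaturizer:
        """Replaces a categorical value with label-conditional count statistics."""

        def __init__(self, num_labels, alpha=1.0):
            self.num_labels = num_labels
            self.alpha = alpha  # additive smoothing for rarely seen values
            self.counts = defaultdict(lambda: [0] * num_labels)

        def update(self, value, label):
            # Accumulate how often each label co-occurred with this feature value.
            self.counts[value][label] += 1

        def featurize(self, value):
            # Emit the total count plus smoothed conditional label rates; the raw
            # categorical value itself never reaches the downstream model.
            c = self.counts[value]
            total = sum(c)
            denom = total + self.alpha * self.num_labels
            return [total] + [(ci + self.alpha) / denom for ci in c]

    # Toy usage: summarize an "advertiser id" field from click events.
    cf = CountFeaturizer(num_labels=2)
    for advertiser, clicked in [("ad42", 1), ("ad42", 0), ("ad7", 0)]:
        cf.update(advertiser, clicked)
    print(cf.featurize("ad42"))  # [2, 0.5, 0.5] with alpha=1.0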
Storage and Search in Dynamic Peer-to-Peer Networks
We study robust and efficient distributed algorithms for searching, storing,
and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are
highly dynamic networks that experience heavy node churn (i.e., nodes join and
leave the network continuously over time). Our goal is to guarantee, despite
high node churn rate, that a large number of nodes in the network can store,
retrieve, and maintain a large number of data items. Our main contributions are
fast randomized distributed algorithms that guarantee the above with high
probability (whp) even under high adversarial churn:
1. A randomized distributed search algorithm that (whp) guarantees that
searches from as many as n - o(n) nodes (n is the stable network size)
succeed in O(log n) rounds despite O(n / log^(1+δ) n) churn per round, for
any small constant δ > 0. We assume that the churn is controlled by an
oblivious adversary (that has complete knowledge and control of what nodes
join and leave and at what time, but is oblivious to the random choices made
by the algorithm).
2. A storage and maintenance algorithm that guarantees (whp) that data items
can be efficiently stored (with only Θ(log n) copies of each data item) and
maintained in a dynamic P2P network with churn rate up to O(n / log^(1+δ) n)
per round. Our search algorithm together with our storage and maintenance
algorithm guarantees that as many as n - o(n) nodes can efficiently store,
maintain, and search even under O(n / log^(1+δ) n) churn per round. Our
algorithms require only polylogarithmic in n bits to be processed and sent
(per round) by each node.
To the best of our knowledge, our algorithms are the first-known,
fully-distributed storage and search algorithms that provably work under highly
dynamic settings (i.e., high churn rates per step).
Comment: to appear at SPAA 2013
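For intuition about how a polylogarithmic number of copies can outlast heavy churn between maintenance steps, here is a toy, centralized simulation (illustrative parameters only; the paper's algorithm is fully distributed, and its oblivious adversary is approximated here by random evictions):

    import math, random

    def survives(n=1 << 14, delta=0.5, c=4, epochs=50):
        """Toy model: an item kept on c*log2(n) random nodes, re-replicated every
        ~log n rounds, while about n / log^(1+delta) n nodes churn out per round."""
        churn = int(n / math.log2(n) ** (1 + delta))
        copies = int(c * math.log2(n))
        gap = int(math.log2(n))  # rounds between maintenance steps
        for _ in range(epochs):
            replicas = set(random.sample(range(n), copies))  # maintenance: refresh copies
            for _ in range(gap):
                evicted = set(random.sample(range(n), churn))  # churned-out nodes this round
                replicas -= evicted
                if not replicas:
                    return False  # item lost before the next refresh
        return True

    print(survives())  # usually True: some copy survives every inter-maintenance gap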
XRay: Enhancing the Web's Transparency with Differential Correlation
Today's Web services - such as Google, Amazon, and Facebook - leverage user
data for varied purposes, including personalizing recommendations, targeting
advertisements, and adjusting prices. At present, users have little insight
into how their data is being used. Hence, they cannot make informed choices
about the services they use. To increase transparency, we developed XRay,
the first fine-grained, robust, and scalable personal data tracking system for
the Web. XRay predicts which data in an arbitrary Web account (such as emails,
searches, or viewed products) is being used to target which outputs (such as
ads, recommended products, or prices). XRay's core functions are service
agnostic and easy to instantiate for new services, and they can track data
within and across services. To make predictions independent of the audited
service, XRay relies on the following insight: by comparing outputs from
different accounts with similar, but not identical, subsets of data, one can
pinpoint targeting through correlation. We show both theoretically, and through
experiments on Gmail, Amazon, and YouTube, that XRay achieves high precision
and recall by correlating data from a surprisingly small number of extra
accounts.
Comment: Extended version of a paper presented at the 23rd USENIX Security
Symposium (USENIX Security 14).
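The differential correlation insight can be illustrated with a simplified scoring rule (a sketch of the general principle only; XRay's actual account construction, inference, and thresholds are those described in the paper): for each candidate input, compare how often the output appears in auxiliary accounts that contain the input versus those that do not.

    def rate(flags):
        return sum(flags) / len(flags) if flags else 0.0

    def differential_correlation(accounts, candidate_inputs, output_seen):
        """accounts: list of sets of inputs placed in each auxiliary account.
        output_seen: list of booleans, whether the targeted output (e.g. an ad)
        appeared in the corresponding account. Returns inputs ranked by the gap
        between the output rate with vs. without the input."""
        scores = {}
        for x in candidate_inputs:
            with_x = [seen for acc, seen in zip(accounts, output_seen) if x in acc]
            without_x = [seen for acc, seen in zip(accounts, output_seen) if x not in acc]
            scores[x] = rate(with_x) - rate(without_x)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy usage: the ad only shows up in accounts that contain the "loans" email.
    accounts = [{"loans", "travel"}, {"loans"}, {"travel"}, set()]
    seen = [True, True, False, False]
    print(differential_correlation(accounts, {"loans", "travel"}, seen))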
Web Transparency for Complex Targeting: Algorithms, Limits, and Tradeoffs
Big Data promises important societal progress but exacerbates the need for due
process and accountability. Companies and institutions can now discriminate
between users at an individual level using collected data or past behavior.
Worse, today they can do so in near perfect opacity. The nascent field of web
transparency aims to develop the tools and methods necessary to reveal how
information is used; however, today it lacks robust tools that let users and
investigators identify targeting using multiple inputs. Here, we formalize for
the first time the problem of detecting and identifying targeting on
combinations of inputs and provide the first algorithm that is asymptotically
exact. This algorithm is designed to serve as a theoretical foundational block
for building future scalable and robust web transparency tools. It offers three
key properties. First, our algorithm is service agnostic and applies to a
variety of settings under a broad set of assumptions. Second, our algorithm's
analysis delineates a theoretical detection limit that characterizes which
forms of targeting can be distinguished from noise and which cannot. Third, our
algorithm establishes fundamental tradeoffs that lead the way to new metrics
for the science of web transparency. Understanding the tradeoff between
effective targeting and targeting concealment lets us determine under which
conditions predatory targeting can be made unprofitable by transparency tools.
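To make the notion of targeting on combinations of inputs concrete, the single-input scoring idea sketched above can be extended naively to candidate subsets of inputs (an illustrative sketch only, not the asymptotically exact algorithm contributed by this paper):

    from itertools import combinations

    def rate(flags):
        return sum(flags) / len(flags) if flags else 0.0

    def score_combinations(accounts, inputs, output_seen, max_size=2):
        """Scores every subset of inputs up to max_size by how much more often
        the output appears in accounts containing the whole subset than in the
        remaining accounts."""
        scores = {}
        for k in range(1, max_size + 1):
            for combo in combinations(sorted(inputs), k):
                inside = [s for acc, s in zip(accounts, output_seen) if set(combo) <= acc]
                outside = [s for acc, s in zip(accounts, output_seen) if not set(combo) <= acc]
                scores[combo] = rate(inside) - rate(outside)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)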
Vers une plus grande transparence du Web
More and more, the Web giants (Amazon, Google, and Twitter foremost among them)
are tapping into the goldmine of "Big Data": they collect a myriad of data that
they exploit for their personalized recommendation algorithms and their
advertising campaigns. Such methods can considerably improve the services
delivered to their users, but their opacity is a subject of debate. Indeed, to
date there is no sufficiently robust tool that can trace how a user's data and
personal information are used by online services on the Web. Motivated by this
lack of transparency, we developed a prototype named XRay, which can predict
which piece of data, among all those present in a user account, is responsible
for the receipt of an advertisement. In this article, we present its principle
as well as the results of our first experiments. At the same time, we introduce
the very first theoretical model for the problem of Web transparency, and we
interpret XRay's performance in light of the results obtained in this model. In
particular, we show that Θ(log N) auxiliary user accounts, filled by a
randomized procedure, suffice to determine which of the N data items present
caused the receipt of an advertisement. We briefly discuss possible extensions
and some open problems.
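As a quick sanity check of the Θ(log N) claim under a simple assumed model (each auxiliary account receives each of the N data items independently with probability 1/2, and the ad appears exactly when the account holds the single targeted item; an illustration, not the paper's model or proof):

    import math, random

    def uniquely_identified(N=1024, c=3, trials=200):
        """With k = c*log2(N) random half-filled accounts, check how often the
        targeted item is the only one consistent with the observed ads."""
        k = int(c * math.log2(N))
        hits = 0
        for _ in range(trials):
            target = random.randrange(N)
            accounts = [{i for i in range(N) if random.random() < 0.5} for _ in range(k)]
            ads = [target in acc for acc in accounts]
            consistent = [d for d in range(N)
                          if all((d in acc) == seen for acc, seen in zip(accounts, ads))]
            hits += (consistent == [target])
        return hits / trials

    print(uniquely_identified())  # close to 1.0: Θ(log N) accounts pin down the input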
Packing Privacy Budget Efficiently
Machine learning (ML) models can leak information about users, and
differential privacy (DP) provides a rigorous way to bound that leakage under a
given budget. This DP budget can be regarded as a new type of compute resource
in workloads of multiple ML models training on user data. Once it is used, the
DP budget is forever consumed. Therefore, it is crucial to allocate it most
efficiently to train as many models as possible. This paper presents a
privacy-budget scheduler that optimizes for efficiency. We formulate privacy
scheduling as a new type of multidimensional knapsack problem, called privacy
knapsack, which maximizes DP budget efficiency. We show that privacy knapsack
is NP-hard, hence practical algorithms are necessarily approximate. We develop
an approximation algorithm for privacy knapsack, DPK, and evaluate it on
microbenchmarks and on a new, synthetic private-ML workload we developed from
the Alibaba ML cluster trace. We show that DPK: (1) often approaches the
efficiency-optimal schedule, (2) consistently schedules more tasks compared to
a state-of-the-art privacy scheduling algorithm that focuses on fairness
(1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some
level of fairness for efficiency. Therefore, using DPK, DP ML operators should
be able to train more models on the same amount of user data while offering the
same privacy guarantee to their users.
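To make the privacy knapsack formulation concrete (a minimal sketch under assumed data structures, not the DPK algorithm from the paper), each data block carries a fixed DP budget capacity and each training task demands some epsilon from the blocks it touches; even a naive greedy packer shows how budget, once consumed, constrains later tasks:

    def greedy_privacy_knapsack(capacity, tasks):
        """capacity: dict block -> remaining DP budget (epsilon).
        tasks: dict task -> dict block -> epsilon demanded from that block.
        Greedily admits tasks in order of total demand; a task runs only if every
        block it touches still has enough budget, which is then consumed forever."""
        remaining = dict(capacity)
        scheduled = []
        for task, demand in sorted(tasks.items(), key=lambda kv: sum(kv[1].values())):
            if all(remaining[b] >= eps for b, eps in demand.items()):
                for b, eps in demand.items():
                    remaining[b] -= eps  # DP budget, once spent, never returns
                scheduled.append(task)
        return scheduled, remaining

    # Toy usage: two user-data blocks with a global budget of epsilon = 1.0 each.
    capacity = {"block_a": 1.0, "block_b": 1.0}
    tasks = {
        "model_1": {"block_a": 0.3},
        "model_2": {"block_a": 0.5, "block_b": 0.5},
        "model_3": {"block_b": 0.8},
    }
    print(greedy_privacy_knapsack(capacity, tasks))
    # (['model_1', 'model_3'], {'block_a': 0.7, 'block_b': 0.2}): model_2 no longer fits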